FIX: Removed duplicate convolution for DoRA #2153
Conversation
Thanks for this PR. We're aware of this potential inefficiency, but I think it's not as easy as re-using the base result. The reasoning is explained here. Back then, we were only dealing with linear layers, but I'm certain that the same logic applies to convolutional layers. The good news is that this optimization is indeed possible if dropout is set to 0 or if we're in eval mode, see #2122. LMK what you think.
Thanks for the clarification! Would it be possible to apply the dropout for DoRA similarly to how LoRA handles it, i.e. only to the LoRA path? Also, what do you think of the fix for convolutional layers using the "groups" argument?
Could you clarify what you mean here, maybe with a small code example? Note that we have to ensure that the implementation sticks to the specification of the original paper. When we have no dropout, though, we should be able to make the same optimization as in #2122.
I wasn't aware of that discussion.
So the reasoning behind why the DoRA optimization is not possible when we use dropout is the one from that linked comment. After looking at the DoRA paper, I can't figure out why this is an issue. If we see DoRA as a decomposition of the weight into magnitude and direction, then, based on the paper, why would we need to compute the full-rank convolution again with the dropped-out input?
Note that when we have LoRA+DoRA+dropout, we ensure that dropout is consistently applied to the LoRA part and the "base_result" part. If we use the base layer's output, which is computed without dropout, this consistency is broken.
I think I understand your point. But if we look at LoRA (e.g. ln 1120 in layer.py), we see that we also don't apply the lora_dropout to the base layer's output.
So my question is whether the "base_result" part even needs the dropout for DoRA. And if we do need the dropout in the "base_result" part, why do we not need it for LoRA?
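For reference, the LoRA forward being referred to boils down to something like this (a simplified sketch, not the exact peft code; the argument names are illustrative):

```python
def lora_forward(x, base_layer, lora_A, lora_B, dropout, scaling):
    # Simplified LoRA forward: dropout is applied only to the input of the
    # low-rank branch, while the base layer sees the original, un-dropped x.
    return base_layer(x) + lora_B(lora_A(dropout(x))) * scaling
```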
Exactly. The base result inside the DoRA calculation is computed from the dropped-out input. In your proposed code, we would instead use the base layer's output, which is computed from the original input without dropout. Only if there is no dropout can we re-use the base result. You can also create a LoRA layer with DoRA and check that the outputs differ between the old code and the suggested change when dropout is applied (fix the seed to ensure that there is no randomness involved).
Yes, I understand the reasoning and that my suggestion would produce a different output. I just could not find the reasoning for why the dropout needs to be applied like this in the DoRA paper. But I'll assume you're right. The reason why I am questioning this is that we do not seem to use the dropout in the same way for LoRA. So my question is: why do we not apply the dropout to the base layer's output for LoRA as well?
Okay, I understand now. Our calculation for DoRA is a bit more complicated than if we simply followed the equation from the paper. The reason is that we first calculate the base result in the layer's forward and then add the DoRA adjustment on top, instead of computing the DoRA output directly. Trying to fit this into an equation, this is what I get (note that I omitted the scale for simplicity):

y = W x + (m / ||W + BA|| - 1) * W drop(x) + (m / ||W + BA||) * B A drop(x)
  = W x - W drop(x) + (m / ||W + BA||) * (W drop(x) + B A drop(x))

If we did not have the dropout in the base result part, then the first 2 terms in the final equation, W x - W drop(x), would cancel out; with dropout they leave a non-zero difference, which is what keeps drop(x) applied consistently to the whole scaled term. Not sure if @nbasyl would have time to check this.
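For illustration, here is a self-contained numeric check of this equation with plain torch (the shapes, values, and dropout probability are made up; this is not the peft code):

```python
import torch

torch.manual_seed(0)
in_f, out_f, r = 8, 6, 4
x = torch.randn(3, in_f)
W = torch.randn(out_f, in_f)
A = torch.randn(r, in_f) * 0.1
B = torch.randn(out_f, r) * 0.1
m = torch.rand(out_f) + 0.5                    # DoRA magnitude vector
drop = torch.nn.Dropout(p=0.5)                 # training mode by default

scale = (m / (W + B @ A).norm(dim=1)).view(1, -1)   # m / ||W + BA|| per output row

xd = drop(x)                                   # one fixed dropped-out input
# current implementation: the base result inside DoRA is recomputed from drop(x)
y_current = x @ W.T + (scale - 1) * (xd @ W.T) + scale * (xd @ A.T @ B.T)
# proposed re-use of the base layer output (no dropout in the base result part)
y_reuse = x @ W.T + (scale - 1) * (x @ W.T) + scale * (xd @ A.T @ B.T)
print(torch.allclose(y_current, y_reuse))      # False: W x - W drop(x) does not cancel

drop.eval()                                    # without dropout the two versions agree
y_current = x @ W.T + (scale - 1) * (drop(x) @ W.T) + scale * (drop(x) @ A.T @ B.T)
y_reuse = x @ W.T + (scale - 1) * (x @ W.T) + scale * (drop(x) @ A.T @ B.T)
print(torch.allclose(y_current, y_reuse))      # True
```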
Right! And if we look at the implementation of LoRA (ln 1121 in layer.py), we see that there we also can't simplify the term W x + B A drop(x) into (W + BA) x when dropout is active. Since the dropout we are talking about here is actually applied only to the LoRA path, the same kind of inconsistency with the paper's equation already exists for LoRA.
Any updates on this?
Not sure if Shih-Yang currently has time to look into this. In the meantime, how about opening a separate PR for the groups fix?
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
not stale
Hi @gslama12 @BenjaminBossan, apologies for the delayed response—I was completely swamped with schoolwork last month. My initial intuition regarding the DoRA implementation was to ensure that dropout(x) applies "only" to all trainable parameters, including not just the LoRA components but also the magnitude vector m. This is why I wanted to keep the base result intact. However, I now see your point that the non-zero difference in W x - W drop(x) in the final result equation, when lora_dropout != 0, is indeed a bit unintended. I hadn’t noticed that issue until you pointed it out. To address this, I think we could modify the DoRA implementation so that the magnitude vector in the first term of DoRA is detached from the gradient graph, ensuring it’s unaffected by dropout. For example, we could try something like this: |
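A minimal sketch of that idea for the linear case (illustrative code written around the equation above, not the actual peft or DoRA implementation; the detach is placed on the magnitude scaling of the first term):

```python
import torch
import torch.nn.functional as F

def dora_linear_forward(x, weight, lora_A, lora_B, magnitude, dropout, scaling=1.0):
    """DoRA linear forward where the magnitude scaling of the first (base) term is
    detached, so the magnitude vector m only receives gradients through the LoRA term."""
    x_d = dropout(x)
    lora_weight = lora_B @ lora_A                                   # (out, in)
    weight_norm = (weight + scaling * lora_weight).norm(dim=1)      # ||W + BA|| per row
    mag_norm_scale = (magnitude / weight_norm).view(1, -1)          # (1, out)
    base_result = F.linear(x_d, weight)                             # W drop(x)
    lora_result = F.linear(F.linear(x_d, lora_A), lora_B)           # B A drop(x)
    return (mag_norm_scale.detach() - 1) * base_result + mag_norm_scale * lora_result * scaling
```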
Thanks a lot for your comment @nbasyl, what you say makes sense. I agree that we should not change the current implementation, as it would break reproducibility of training DoRA adapters. Instead, as you suggested, when there is an improvement, we can later implement it as a new adapter or as a new option for LoRA/DoRA. @gslama12 Would you be interested in implementing the optimization for the non-dropout case for convolution layers, similar to #2122?
@nbasyl Thanks for the reply!
@gslama12 Are you still interested in working on this? Also, there is still the other issue you reported, about PEFT not honoring the groups argument.
@BenjaminBossan Yes! I am currently quite swamped with schoolwork but I will surely get to it in February if that is fine for you. I will also look into the groups issue then and see what can be done there.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
not stale
Reopened as #2371
The DoRA optimization for conv layers is now merged. Please LMK if you still plan on working on adding the groups support.
Yes, I will think about the groups fix, but it might be slightly more complex, as we also need to consider this when merging the weights.
Good point. In case it's not easily solved, we can still support it, raise an error when trying to merge, and probably also add a warning at init time that a model using groups cannot be merged.
Ok, that sounds like a good plan. I will let you know when I have worked something out.
@gslama12 I did some experiments, and I wonder if it is not actually quite simple. Here is the diff I came up with:

```diff
modified src/peft/tuners/lora/layer.py
@@ -1058,10 +1058,16 @@ class _ConvNd(nn.Module, LoraLayer):
kernel_size = base_layer.kernel_size
stride = base_layer.stride
padding = base_layer.padding
+ groups = base_layer.groups
+ if (self.in_features % groups != 0) or (self.out_features % groups != 0):
+ raise ValueError("TODO this should not happen")
+ if r % groups != 0:
+ raise ValueError("TODO rank must be divisible by groups")
+
conv_layer = type(base_layer)
out_kernel = out_stride = (1,) * (self._kernel_dim - 2)
- self.lora_A[adapter_name] = conv_layer(self.in_features, r, kernel_size, stride, padding, bias=False)
- self.lora_B[adapter_name] = conv_layer(r, self.out_features, out_kernel, out_stride, bias=lora_bias)
+ self.lora_A[adapter_name] = conv_layer(self.in_features, r, kernel_size, stride, padding, bias=False, groups=groups)
+ self.lora_B[adapter_name] = conv_layer(r, self.out_features, out_kernel, out_stride, bias=lora_bias, groups=groups)
self.lora_bias[adapter_name] = lora_bias
if use_rslora:
@@ -1231,14 +1237,17 @@ class _ConvNd(nn.Module, LoraLayer):
weight_A = self.lora_A[adapter].weight
weight_B = self.lora_B[adapter].weight
+ base_layer = self.get_base_layer()
if cast_to_fp32:
weight_A = weight_A.float()
weight_B = weight_B.float()
# https://github.com/bmaltais/kohya_ss/blob/feb6728762a8f463d15ba936d189d4c3abfaa1ab/networks/lora.py#L117
- if self.get_base_layer().weight.size()[2:4] == (1, 1):
- # conv2d 1x1
+ is_1x1 = base_layer.weight.size()[2:] == (1,) * (self._kernel_dim - 2)
+ if is_1x1:
+ if base_layer.groups != 1:
+ raise ValueError("TODO")
output_tensor = (weight_B.squeeze(3).squeeze(2) @ weight_A.squeeze(3).squeeze(2)).unsqueeze(2).unsqueeze(
3
) * self.scaling[adapter]
@@ -1247,6 +1256,7 @@ class _ConvNd(nn.Module, LoraLayer):
self.conv_fn(
weight_A.transpose(0, 1),
weight_B,
+ groups=base_layer.groups,
).transpose(0, 1)
* self.scaling[adapter]
)
```

It's just quick and dirty, but in my initial testing it seems to work. Unsolved so far is the 1x1 conv part, since we're not using the conv function there. Overall, I wonder how useful it is to apply LoRA to conv layers with groups, since they are already quite parameter efficient. But it shouldn't hurt to include it.
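For what it's worth, a rough way to sanity-check a change like this is to compare merged and unmerged outputs on a toy model with a grouped conv. The sketch below assumes the diff above is applied; it uses peft's public LoraConfig/get_peft_model API, and the module name "conv" and the shapes are arbitrary:

```python
import torch
import torch.nn as nn
from peft import LoraConfig, get_peft_model

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(8, 16, kernel_size=3, padding=1, groups=4)

    def forward(self, x):
        return self.conv(x)

torch.manual_seed(0)
model = get_peft_model(Toy(), LoraConfig(r=4, lora_alpha=8, target_modules=["conv"])).eval()

# give the adapter non-zero weights so the comparison is not trivially equal
layer = model.base_model.model.conv
nn.init.normal_(layer.lora_A["default"].weight, std=0.1)
nn.init.normal_(layer.lora_B["default"].weight, std=0.1)

x = torch.randn(2, 8, 10, 10)
with torch.no_grad():
    y_unmerged = model(x)
    y_merged = model.merge_and_unload()(x)

print(torch.allclose(y_unmerged, y_merged, atol=1e-5))  # should be True if merging is consistent
```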
I actually tried to reproduce the error that I originally got for DoRA when using conv layers with groups and to my surprise I couldn't. There might be something that has changed since then, as it seems to work fine with the current implementation. Regarding your last point: yes, conv layers that use groups are quite efficient during inference, but during training LoRA can still reduce FLOPs and memory by quite a bit.
Is that including merging? Also, just because there is no error does not mean it works correctly.
I didn't do the math, but it's good to know it can still be useful.
Yes, also no error for merging. But of course, as you stated, this does not mean that it works correctly. The fix I had previously was basically changing the dimensions of the conv_layer in layer.py so that they match when the base layer uses groups.
I had indeed also tried to apply groups to only one of the two LoRA layers.
I just revisited this issue and I think your suggestion from here is actually something different from what I intended. In your approach you actually transfer the groups argument to the LoRA layers, making the adapter itself a conv layer with groups. I think the adapter does not need a groups argument, because it has very few trainable parameters anyway, meaning we do not need the additional efficiency that the groups argument provides. Instead, my idea was to fix the dimensions of the lora_A and lora_B weights so that they do not cause issues when the base_layer has groups. I think taking the approach from the Microsoft LoRA implementation is a safe bet. I have opened this as a separate PR in #2403.
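For context, the gist of that dimension fix, as I understand the Microsoft loralib approach, is to size the flat low-rank factors so that their product reshapes directly into the grouped conv weight. A sketch under that assumption, with made-up shapes:

```python
import torch
import torch.nn as nn

out_ch, in_ch, groups, k, r = 16, 8, 4, 3, 4
conv = nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=1, groups=groups)
print(conv.weight.shape)  # torch.Size([16, 2, 3, 3]) == (out_ch, in_ch // groups, k, k)

# loralib-style factors: B @ A has out_ch * (in_ch // groups) * k * k elements,
# so the delta can be viewed as the grouped conv weight without touching `groups`
lora_A = nn.Parameter(conv.weight.new_zeros((r * k, in_ch * k)))
lora_B = nn.Parameter(conv.weight.new_zeros((out_ch // groups * k, r * k)))

delta = (lora_B @ lora_A).view(conv.weight.shape)
print(delta.shape == conv.weight.shape)  # True
```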
This pull request fixes two problems:

1. Duplicate Convolution in the DoRA Implementation for ConvNd Layers: Since the base layer convolution is already computed in `layer.py`, we don't need to compute it again in `dora.py`. Computing it again doubles the FLOPs consumption during the forward pass, resulting in significantly higher FLOPs overall. We can pass the result from the base layer computed in `layer.py` to the forward pass of the `_DoraConvNdLayer` in `dora.py` and save computational resources.

2. Bugfix for DoRA regarding Convolutional Layers using the Groups Argument: CNNs that, for example, use depthwise separable convolutional layers result in an error when applying DoRA. Adjusting the dimensions of the `conv_layer` in `layer.py` fixes this issue.
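To illustrate the first point, here is a conceptual sketch of a DoRA conv forward that accepts an already computed base result instead of recomputing the convolution. It is illustrative only (groups=1, stride 1, bias and dropout omitted) and, per the discussion above, it matches the current behaviour only when lora_dropout is 0:

```python
import torch
import torch.nn.functional as F

def dora_conv2d(x, weight, lora_A, lora_B, magnitude, scaling=1.0, base_result=None):
    """Sketch of a DoRA 2d-conv forward that can re-use the base layer's output.
    Shapes: weight (out, in, k, k), lora_A (r, in, k, k), lora_B (out, r, 1, 1),
    magnitude (out,). Bias and dropout are omitted."""
    # low-rank weight delta with the same shape as `weight`
    lora_weight = (lora_B.flatten(1) @ lora_A.flatten(1)).view(weight.shape)
    # per-output-channel norm of W + scaling * BA
    weight_norm = (weight + scaling * lora_weight).flatten(1).norm(dim=1)
    mag_norm_scale = (magnitude / weight_norm).view(1, -1, 1, 1)
    if base_result is None:
        # the convolution the PR wants to avoid computing twice
        base_result = F.conv2d(x, weight, padding="same")
    lora_result = F.conv2d(F.conv2d(x, lora_A, padding="same"), lora_B)
    # DoRA output from the paper: (m / ||W + scaling * BA||) * (W x + scaling * B A x)
    return mag_norm_scale * (base_result + scaling * lora_result)
```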